Custom Integration
Calibo Accelerate supports custom transformation jobs that let you build flexible data integration pipelines with your own code. This is useful when the built-in templates do not cover specific logic, format conversions, or domain-specific processing requirements.
The code for custom jobs is written in PySpark or SQL and executed on Databricks clusters. You can package dependencies as wheel files for reusable deployment across multiple pipelines.
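As a minimal sketch of what such a custom job might contain, the function below implements a simple record transformation in plain Python, so its logic can be packaged in a wheel and reused across pipelines. The function name, field names, and the UDF wiring in the comments are illustrative assumptions, not part of the Calibo Accelerate API.

```python
# Hypothetical transformation that could live in a wheel-packaged module.
# Field and function names are examples only.

def normalize_email(value):
    """Trim whitespace and lowercase an email value; tolerate None."""
    return (value or "").strip().lower()


def normalize_customer(record: dict) -> dict:
    """Return a copy of one customer record with a normalized email field."""
    out = dict(record)
    out["email"] = normalize_email(out.get("email"))
    return out


# In a PySpark job on a Databricks cluster, the same helper could be
# wrapped as a UDF (sketch only; requires an active Spark session):
#
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import StringType
#
#   clean_email = udf(normalize_email, StringType())
#   df = df.withColumn("email", clean_email(df["email"]))

if __name__ == "__main__":
    print(normalize_customer({"email": "  Alice@Example.COM "}))
```

Keeping the core logic in plain functions like this makes it unit-testable outside Spark, while the pipeline code only handles the UDF registration.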
The Calibo Accelerate platform supports the following technologies for custom integration:
- Databricks custom integration
What's next? Create a Data Pipeline